
Peteish13 #739

Merged · 310 commits merged into main from peteish13-augusta on Nov 18, 2024

Conversation

@dirkgr (Member) commented Oct 21, 2024

  • Peteish13 configs
  • More options for torch.compile()
  • Apply compile() to one block at a time (see the sketch below)
  • Fixes to run in Google Cloud
  • Scripts to run on the Augusta cluster
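
A minimal sketch of what "apply compile() to one block at a time" could look like; the `model.transformer.blocks` layout and the helper name are illustrative assumptions, not the exact OLMo code:

```python
import torch
import torch.nn as nn

def compile_blockwise(model: nn.Module) -> nn.Module:
    # Hypothetical layout: compile each transformer block separately instead of
    # wrapping the whole model in one torch.compile() call. Smaller graphs compile
    # faster, and a recompile triggered in one block does not throw away the
    # compiled code for the others.
    for i, block in enumerate(model.transformer.blocks):
        model.transformer.blocks[i] = torch.compile(block)
    return model
```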

@dirkgr requested review from epwalsh and 2015aroras on November 15, 2024 19:49
@dirkgr marked this pull request as ready for review on November 15, 2024 19:57
@@ -1036,6 +1036,10 @@ def eval(self) -> Dict[str, Any]:

del eval_batches

# Eval compiles a bunch more versions, and the result is terrible. This way we get back to zero.
Member

What do you mean by "the result is terrible"?

Member

So this prompted me to look into this a bit more, and I think I've found a better solution: just mark the model input sizes as dynamic. I tested this out in OLMo-core and it appears to work well.
allenai/OLMo-core#105
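
A minimal sketch of the approach described above, assuming a standard `input_ids` tensor of shape `(batch, seq_len)`; the actual change is in allenai/OLMo-core#105, so treat the function name and call site here as illustrative:

```python
import torch

def forward_with_dynamic_shapes(compiled_model, input_ids: torch.Tensor):
    # Mark the batch and sequence dimensions as dynamic so torch.compile traces
    # one shape-polymorphic graph instead of re-specializing for every eval
    # batch size it encounters.
    torch._dynamo.mark_dynamic(input_ids, 0)  # batch dimension
    torch._dynamo.mark_dynamic(input_ids, 1)  # sequence dimension
    return compiled_model(input_ids)
```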

Member Author

I think it compiles a bunch of versions for different batch sizes, because that's how we call it during eval, and then they stick around. In all of my early runs I had high TPS until the first eval and low TPS afterwards. This is what fixed it.
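
A sketch of what "get back to zero" after eval plausibly means here; whether the diff uses exactly `torch._dynamo.reset()` is an assumption, and `trainer.eval()` is a stand-in for the method shown in the hunk above:

```python
import torch
from typing import Any, Dict

def eval_and_reset(trainer) -> Dict[str, Any]:
    # Eval feeds the compiled model batch shapes the training loop never uses,
    # so Dynamo caches extra graph specializations that drag throughput down
    # afterwards. Dropping all compiled artifacts trades one recompile on the
    # next training step for a return to the original training TPS.
    metrics = trainer.eval()
    torch._dynamo.reset()
    return metrics
```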

Member Author

I tried dynamic and it was bad. I don't remember in what way it was bad, but it didn't work. That's why I added that version in the first place.

Member

Ok, oh well. I tested with nightly, so maybe it's just better now with recent compiler advances.

@dirkgr merged commit 7e81a6c into main on Nov 18, 2024
12 checks passed
@dirkgr deleted the peteish13-augusta branch on November 18, 2024 18:38